11 research outputs found

    Categories, words and rules in language acquisition

    Acquiring language requires learning a set of words (i.e. the lexicon) and the abstract rules that combine them to form sentences (i.e. syntax). In this thesis, we show that infants acquiring their mother tongue rely on different speech categories to extract words and to abstract regularities. We address this issue with a study that investigates how young infants use consonants and vowels, showing that certain computations are tuned to one or the other of these speech categories.

    The Invariance Problem in Infancy: A Pupillometry Study


    Word frequency, function words and the second gavagai problem


    A cross-talk between brain-damage patients and infants on action and language

    Sensorimotor representations in the brain encode the sensory and motor aspects of one's own bodily activity. It is highly debated whether sensorimotor representations are the core basis for the representation of action-related knowledge and, in particular, action words such as verbs. In this review, we address this question by bringing to bear insights from the study of brain-damaged patients exhibiting language disorders and from the study of the mechanisms of language acquisition in infants. Cognitive neuropsychology studies have assessed how damage to the representations supporting action production affects patients' ability to process action-related words. While correlations between verbal and nonverbal (motor) impairments are very common in patients, damage to the representations for action production can leave the ability to understand action words unaffected; likewise, actions can still be produced successfully in cases of impaired action-word understanding. Studies with infants have evaluated the relevance of sensorimotor information when infants learn to map a novel word onto an action that they are performing or perceiving. These results demonstrate that sensorimotor information is insufficient to fully account for the complexity of verb learning: in this process, infants seem to privilege abstract constructs such as goal, intentionality and causality, as well as syntactic constraints, over the perceptual and motor dimensions of an action. Altogether, the empirical data suggest that, while not crucial for verb learning and understanding, sensorimotor processes can contribute to solving the problem of symbol grounding and/or serve as a primary mechanism in social cognition, for learning about others' goals and intentions. By assessing the relevance of sensorimotor representations in the way action-related words are acquired and represented, we aim to provide a useful set of criteria for testing specific predictions made by different theories of concepts.

    Visual perception grounding of social cognition in preverbal infants

    Social life is inherently relational, entailing the ability to recognize and monitor not only the social entities in the visual world, but also the relations between those entities. In the first months of life, visual perception already privileges (i.e., processes with the highest priority and efficiency) socially relevant entities such as faces and bodies. Here, we show that within the sixth month of life, infants also discriminate between different configurations of multiple human bodies, based on the internal visuo-spatial relations between bodies that cue, or do not cue, interaction. We measured differential looking times between two images of the same body dyad, differing only in the relative spatial positioning of the two bodies. Results showed that infants discriminated between face-to-face and back-to-back body dyads (Experiment 1), and processed face-to-face dyads (but not back-to-back dyads) with the same efficiency (i.e., processing speed) as single bodies (Experiment 2). Looking times for dyads with one body facing another without reciprocation were comparable to looking times for face-to-face dyads, and differed from looking times for back-to-back dyads, suggesting a general discrimination between the presence and absence of a relation (Experiment 3). Infants' discrimination of images based on the relative positioning of items was selective to body dyads and did not generalize to body-object pairs (Experiment 4). We suggest that infants' early sensitivity to the relative positioning of bodies in a scene is a building block of social cognition, preparing the discovery of the keel and backbone of social life: relations.

    Do humans really learn AⁿBⁿ artificial grammars from exemplars?

    An important topic in the evolution of language is the kinds of grammars that can be computed by humans and other animals. Fitch and Hauser (F&H, 2004) approached this question by assessing the ability of different species to learn two grammars, (AB)ⁿ and AⁿBⁿ. AⁿBⁿ was taken to indicate a phrase structure grammar, eliciting a center-embedded pattern, whereas (AB)ⁿ indicates a grammar whose strings entail only local relations between the categories of constituents. F&H's data suggest that humans, but not tamarin monkeys, learn an AⁿBⁿ grammar, whereas both learn the simpler (AB)ⁿ grammar (Fitch & Hauser, 2004). In their experiments, the A constituents were syllables pronounced by a female voice, whereas the B constituents were syllables pronounced by a male voice. This study proposes that what characterizes the AⁿBⁿ exemplars is the distributional regularities of the syllables pronounced by either a male or a female voice, rather than the underlying, more abstract patterns. This article replicates F&H's data and reports new controls using either categories similar to those in F&H or less salient ones. It shows that distributional regularities explain the data better than grammar learning. Indeed, when familiarized with AⁿBⁿ exemplars, participants failed to discriminate A³B² and A²B³ items from AⁿBⁿ items, missing the crucial feature that the number of As must equal the number of Bs. Therefore, contrary to F&H, this study concludes that no syntactic rules implementing embedded nonadjacent dependencies were learned in these experiments. The difference between human linguistic abilities and their putative precursors in monkeys deserves further exploration.
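    A minimal sketch in Python may help make the two grammars concrete. The syllable inventories below are invented for illustration (in Fitch & Hauser's materials the A tokens were female-voice syllables and the B tokens male-voice syllables); this is not the authors' stimulus set, only a sketch of the string structures under discussion.

    ```python
    import random

    # Illustrative (invented) syllable inventories for the two categories.
    A = ["ba", "di", "yo", "tu"]   # stand-ins for A-category syllables
    B = ["pa", "li", "mo", "nu"]   # stand-ins for B-category syllables

    def ab_n(n):
        """(AB)^n: n local A-B pairs, e.g. A B A B for n=2."""
        return [random.choice(pair) for _ in range(n) for pair in (A, B)]

    def a_n_b_n(n):
        """A^n B^n: n As followed by n Bs, e.g. A A B B for n=2."""
        return [random.choice(A) for _ in range(n)] + \
               [random.choice(B) for _ in range(n)]

    def a_j_b_k(j, k):
        """Foil strings such as A^3 B^2: same block layout, mismatched counts."""
        return [random.choice(A) for _ in range(j)] + \
               [random.choice(B) for _ in range(k)]

    print(ab_n(2))        # e.g. ['ba', 'li', 'yo', 'pa']
    print(a_n_b_n(2))     # e.g. ['di', 'tu', 'mo', 'nu']
    print(a_j_b_k(3, 2))  # A^3 B^2 foil used to test count matching
    ```

    A learner tracking only the distributional fact "A-syllables first, B-syllables after" accepts A³B² just as readily as A²B², which is the pattern of failure reported in this study.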

    Spatial relations trigger visual binding of people

    To navigate the social world, humans must represent social entities and the relationships between those entities, starting with spatial relationships. Recent research suggests that two bodies are processed with particularly high efficiency in visual perception when they are in a spatial positioning that cues interaction, i.e., close and face-to-face. Socially relevant spatial relations such as facingness may facilitate visual perception by triggering the grouping of bodies into a new integrated percept, which would make the stimuli more visible and easier to process. We used electroencephalography and a frequency-tagging paradigm to measure a neural correlate of grouping (or visual binding) while female and male participants saw images of two bodies face-to-face or back-to-back. The two bodies in a dyad flickered at the frequencies F1 and F2, respectively, and appeared together at a third frequency Fd (dyad frequency). This stimulation should elicit a periodic neural response for each single body at F1 and F2, and a third response at Fd, which would be larger for face-to-face (vs. back-to-back) bodies if those stimuli yield additional integrative processing. Results showed that responses at F1 and F2 were higher for upright than for inverted bodies, demonstrating that our paradigm could capture body-specific activity. Crucially, the response to dyads at Fd was larger for face-to-face (vs. back-to-back) dyads, suggesting integration mediated by grouping. Thus, spatial relations that recur in social interaction (i.e., facingness) may promote the binding of multiple bodies into a new representation. This mechanism can explain how the visual system contributes to integrating and transforming the representation of disconnected individual body shapes into structured representations of social events.
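    The analysis logic of frequency tagging can be sketched with a few lines of Python. The sampling rate, recording length, and tagging frequencies below are hypothetical (the study's actual values are not given in this abstract), and the synthetic signal merely stands in for real EEG; the point is only how the tagged responses at F1, F2 and Fd are read off the amplitude spectrum.

    ```python
    import numpy as np

    # Hypothetical parameters: each body flickers at its own frequency,
    # and the pair appears together at a slower "dyad" frequency Fd.
    fs, dur = 500.0, 60.0          # sampling rate (Hz), recording length (s)
    f1, f2, fd = 5.0, 6.0, 1.0     # illustrative tagging frequencies (Hz)
    t = np.arange(0, dur, 1 / fs)

    # Synthetic "EEG": periodic responses at F1, F2 and Fd buried in noise.
    # A response at Fd is the hypothesized marker of integrative (grouping)
    # processing of the two bodies as one percept.
    rng = np.random.default_rng(0)
    eeg = (np.sin(2 * np.pi * f1 * t)
           + np.sin(2 * np.pi * f2 * t)
           + 0.5 * np.sin(2 * np.pi * fd * t)
           + rng.normal(0, 2.0, t.size))

    # Amplitude spectrum; with a 60 s window the frequency resolution is
    # 1/60 Hz, so each tagged frequency falls on an FFT bin.
    spectrum = np.abs(np.fft.rfft(eeg)) * 2 / t.size
    freqs = np.fft.rfftfreq(t.size, 1 / fs)

    for name, f in [("F1", f1), ("F2", f2), ("Fd", fd)]:
        bin_idx = np.argmin(np.abs(freqs - f))
        print(f"{name} ({f} Hz): amplitude ~ {spectrum[bin_idx]:.2f}")
    ```

    Comparing the Fd amplitude for face-to-face versus back-to-back dyads is then a direct test of the grouping hypothesis described above.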

    Word frequency as a cue for identifying function words in infancy

    While content words (e.g., 'dog') tend to carry meaning, function words (e.g., 'the') mainly serve syntactic purposes. Here, we ask whether 17-month-old infants can use one language-universal cue to identify function word candidates: their high frequency of occurrence. In Experiment 1, infants listened to a series of short, naturally recorded sentences in a foreign language (French). In these sentences, two determiners appeared much more frequently than any content word. Following this, infants were presented with a visual object, and simultaneously with a word pair composed of a determiner and a noun. Results showed that infants associated the object more strongly with the infrequent noun than with the frequent determiner. That is, when presented with both the old object and a novel object, infants were more likely to orient towards the old object when hearing a label with a new determiner and the old noun than when hearing a label with a new noun and the old determiner. In Experiment 2, infants were tested using the same procedure as in Experiment 1, but without the initial exposure to the French sentences. Under these conditions, infants did not preferentially associate the object with nouns, suggesting that the preferential association between nouns and objects does not result from specific acoustic or phonological properties. In line with the various biases and heuristics involved in acquiring content words, we provide the first direct evidence that infants can use distributional cues, especially high frequency of occurrence, to identify potential function words.
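    The distributional cue itself is easy to illustrate in code. The toy corpus below is invented and only loosely mimics the exposure phase (the study's actual sentences are not reproduced here); the frequency cutoff is likewise an arbitrary illustration, not the infants' criterion.

    ```python
    from collections import Counter

    # Invented French-like corpus: the determiners "le"/"la" recur far
    # more often than any content word, as in the exposure sentences.
    corpus = """le chien mange la pomme
    la fille regarde le chat
    le chat voit la balle
    la pomme roule vers le chien"""

    counts = Counter(corpus.split())
    total = sum(counts.values())

    # Flag as function-word candidates the tokens whose frequency sits far
    # above the average -- the language-universal cue discussed above.
    threshold = 2 * (total / len(counts))   # arbitrary illustrative cutoff
    candidates = [w for w, c in counts.items() if c > threshold]
    print(candidates)  # -> ['le', 'la']
    ```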

    Consonants and vowels: different roles in early language acquisition

    Language acquisition involves both acquiring a set of words (i.e. the lexicon) and learning the rules that combine them to form sentences (i.e. syntax). Here, we show that consonants are mainly involved in word processing, whereas vowels are favored for extracting and generalizing structural relations. We demonstrate that such a division of labor between consonants and vowels plays a role in language acquisition. In two very similar experimental paradigms, we show that 12-month-old infants rely more on the consonantal tier when identifying words (Experiment 1), but are better at extracting and generalizing repetition-based structures over the vocalic tier (Experiment 2). These results indicate that infants are able to exploit the functional differences between consonants and vowels at an age when they start acquiring the lexicon, and suggest that basic speech categories are assigned to different learning mechanisms that sustain early language acquisition.
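    A small sketch of the kind of computation at stake, under invented stimuli (the paper's actual items are not reproduced): splitting a CV-syllable word into its consonantal and vocalic tiers and checking for a repetition-based structure, here an ABB pattern, on each tier.

    ```python
    VOWELS = set("aeiou")

    def tiers(word):
        """Split a word into its consonantal and vocalic tiers."""
        c_tier = "".join(ch for ch in word if ch not in VOWELS)
        v_tier = "".join(ch for ch in word if ch in VOWELS)
        return c_tier, v_tier

    def is_abb(seq):
        """True if a 3-element sequence instantiates an ABB repetition."""
        return len(seq) == 3 and seq[0] != seq[1] and seq[1] == seq[2]

    # Invented CV-syllable items: the second carries an ABB pattern on the
    # vocalic tier ("i", "u", "u") even though its consonants all differ.
    for word in ["bakodi", "bikudu"]:
        c_tier, v_tier = tiers(word)
        print(word, "-> consonants:", c_tier, "vowels:", v_tier,
              "| ABB on vowels:", is_abb(v_tier))
    ```

    The abstract's claim is that 12-month-olds generalize this sort of repetition structure more readily when it lives on the vocalic tier, while relying on the consonantal tier to identify words.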

    Newborn's brain activity signals the origin of word memories

    Recent research has shown that specific areas of the human brain are activated by speech from the time of birth. However, it is currently unknown whether newborns' brains also encode and remember the sounds of words when processing speech. The present study investigates the type of information that newborns retain when they hear words and the brain structures that support word-sound recognition. Forty-four healthy newborns were tested with functional near-infrared spectroscopy to establish their ability to memorize the sound of a word and distinguish it from a phonetically similar one, 2 min after encoding. Right frontal regions, comparable to those activated in adults during the retrieval of verbal material, showed a characteristic neural signature of recognition when newborns listened to a test word that had the same vowels as a previously heard word. In contrast, a characteristic novelty response was found when a test word had different vowels from the familiar word, despite having the same consonants. These results indicate that the information carried by vowels is better recognized by newborns than the information carried by consonants. Moreover, these data suggest that right frontal areas may support the recognition of speech sequences from the very first stages of language acquisition.